Though many algorithms can be used to automatically summarize legal case decisions, most fail to incorporate domain knowledge about how important sentences in a legal decision relate to a representation of its document structure. For example, analysis of a legal case summarization dataset demonstrates that sentences serving different types of argumentative roles in the decision appear in different sections of the document. In this work, we propose an unsupervised graph-based ranking model that uses a reweighting algorithm to exploit properties of the document structure of legal case decisions. We also explore the impact of using different methods to compute the document structure. Results on the Canadian Legal Case Law dataset show that our proposed method outperforms several strong baselines.
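The abstract above names an unsupervised graph-based ranking model with a structure-aware reweighting step but gives no details, so the following is only a minimal illustrative sketch: a PageRank-style ranker over a sentence-similarity graph in which edges into each sentence are boosted by a hypothetical per-section weight (the weighting scheme is an assumption, not the paper's algorithm).

```python
import numpy as np

def rank_sentences(sim, section_weight, d=0.85, iters=50):
    """PageRank-style ranking over a sentence similarity graph.

    sim: (n, n) nonnegative similarity matrix between sentences.
    section_weight: length-n weights boosting sentences from sections
    that tend to hold important argumentative roles (hypothetical scheme).
    """
    n = sim.shape[0]
    # Boost edges pointing into highly weighted sentences.
    w = section_weight[:, None] * sim
    # Column-normalize to obtain a transition matrix.
    col_sums = w.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    m = w / col_sums
    # Power iteration with damping, as in standard PageRank.
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * (m @ scores)
    return scores
```

Because the reweighting is applied before normalization, random walks drift toward sentences in high-weight sections, which raises their final rank.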
Speakers build rapport in the process of aligning with each other. Rapport with a tutor has been shown to promote learning while tutoring domain material. Past work on lexical alignment in the educational domain has suffered from limitations both in the measures used to quantify alignment and in the types of interactions in which alignment with an agent has been studied. In this paper, we apply a data-driven measure of alignment based on the concept of shared expressions (which may consist of multiple words), and compare alignment in one-on-one human-robot (H-R) interactions with alignment in the H-R portion of collaborative human-human-robot (H-H-R) interactions. We find that students in the H-R setting align with the teachable robot more than students in the H-H-R setting, and that the relationship between lexical alignment and rapport is more complex than predicted by previous theoretical and empirical work.
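The shared-expression alignment measure is described only at a high level above; as a simplified proxy, one can collect the word sequences both speakers produce and score one speaker's overlap with the other. The n-gram formulation below is an illustrative assumption, not the paper's exact measure.

```python
def ngrams(utterances, max_n=3):
    """Collect all word sequences of up to max_n words from a speaker."""
    grams = set()
    for u in utterances:
        toks = u.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(toks) - n + 1):
                grams.add(tuple(toks[i:i + n]))
    return grams

def alignment_score(utts_a, utts_b, max_n=3):
    """Fraction of speaker B's expressions also produced by speaker A
    (a simplified proxy for an expression-based alignment measure)."""
    a, b = ngrams(utts_a, max_n), ngrams(utts_b, max_n)
    if not b:
        return 0.0
    return len(a & b) / len(b)
```

A data-driven measure like the paper's would additionally restrict attention to expressions that are shared and established within the dialogue itself, rather than all overlapping n-grams.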
Growing interest in developing persuasive texts has promoted applications in automated systems such as debating and essay-scoring systems; however, there is little prior work mining image persuasiveness from an argumentative perspective. To extend persuasiveness mining into the multimodal domain, we present a multimodal dataset, ImageArg, consisting of annotations of image persuasiveness in tweets. The annotations are based on a persuasion taxonomy we developed to explore image functionality and the means of persuasion. We benchmark image persuasiveness on ImageArg using widely used multimodal learning methods. Experimental results show that our dataset offers a useful resource for this rich and challenging topic, with ample room for modeling improvement.
A challenging task when generating summaries of legal documents is the ability to address their argumentative nature. We introduce a simple technique to capture the argumentative structure of legal documents by integrating argument role labels into the summarization process. Experiments with pretrained language models show that our proposed approach improves performance over strong baselines.
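One common way to integrate role labels into a summarization pipeline is to mark each input sentence with its argument role before feeding the text to a pretrained seq2seq model; the marker format below is illustrative, not necessarily the paper's scheme.

```python
def mark_roles(sentences, roles):
    """Prepend an argument-role marker to each sentence so a seq2seq
    summarizer can condition on argumentative structure.

    sentences: list of sentence strings from the legal decision.
    roles: parallel list of role labels (e.g. "Issue", "Reason",
    "Conclusion"); the <Role> marker syntax here is an assumption.
    """
    return " ".join(f"<{role}> {sent}" for sent, role in zip(sentences, roles))
```

The marked-up string would then be tokenized and passed to the summarizer like any other input, letting the model learn which roles tend to contribute summary-worthy content.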
Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images starting from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we answer part of this provocative question by questioning the need for real images when training models for ImageNet classification. More precisely, provided only with the class names that have been used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering those ImageNet clones we denote as ImageNet-SD are able to close a large part of the gap between models produced by synthetic images and models trained with real images for the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.
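Minimal, class-agnostic prompt engineering of the kind described above can be as simple as a single fixed template applied to every class name; the template below is an assumption for illustration, not necessarily the one used in the paper.

```python
def class_prompts(class_names, template="a photo of a {}"):
    """Build one text prompt per ImageNet class name using a single
    class-agnostic template (the template string is an assumption)."""
    return [template.format(name) for name in class_names]

# Each prompt would then be passed to a text-to-image model
# (e.g. a Stable Diffusion pipeline) to sample synthetic training
# images for that class; generation itself is omitted here.
```

Keeping the template fixed across classes is what makes the procedure class-agnostic: the only per-class information is the class name already present in the dataset definition.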
In this work, we propose a novel generative model for mapping inputs to structured, high-dimensional outputs using structured conditional normalizing flows and Gaussian process regression. The model is motivated by the need to characterize uncertainty in the input/output relationship when making inferences on new data. In particular, in the physical sciences, limited training data may not adequately characterize future observed data; it is critical that models adequately indicate uncertainty, particularly when they may be asked to extrapolate. In our proposed model, structured conditional normalizing flows provide parsimonious latent representations that relate to the inputs through a Gaussian process, providing exact likelihood calculations and uncertainty that naturally increases away from the training data inputs. We demonstrate the methodology on laser-induced breakdown spectroscopy data from the ChemCam instrument onboard the Mars rover Curiosity. ChemCam was designed to recover the chemical composition of rock and soil samples by measuring the spectral properties of plasma atomic emissions induced by a laser pulse. We show that our model can generate realistic spectra conditional on a given chemical composition and that we can use the model to perform uncertainty quantification of chemical compositions for new observed spectra. Based on our results, we anticipate that our proposed modeling approach may be useful in other scientific domains with high-dimensional, complex structure where it is important to quantify predictive uncertainty.
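The key property the abstract relies on, uncertainty that naturally increases away from the training inputs, is inherited from the Gaussian process component. The sketch below is a standard RBF-kernel GP regression in NumPy that illustrates only that property; it is not the paper's structured normalizing-flow model.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """Exact GP regression with an RBF kernel on 1-D inputs.

    Returns the predictive mean and variance; the variance reverts to
    the prior (here 1) as x_test moves away from the training inputs.
    """
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    Kss = k(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_train
    var = np.diag(Kss - Ks @ K_inv @ Ks.T)
    return mean, var
```

In the proposed model the GP relates inputs to latent flow representations rather than to outputs directly, but the same mechanism is what makes extrapolation come with honestly inflated uncertainty.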
Much computer vision research has focused on natural images, but technical documents typically consist of abstract images, such as charts, drawings, diagrams, and schematics. How well do general web search engines discover abstract images? Recent advancements in computer vision and machine learning have led to the rise of reverse image search engines. Where conventional search engines accept a text query and return a set of document results, including images, a reverse image search accepts an image as a query and returns a set of images as results. This paper evaluates how well common reverse image search engines discover abstract images. We conducted an experiment leveraging images from Wikimedia Commons, a website known to be well indexed by Baidu, Bing, Google, and Yandex. We measure how difficult an image is to find again (retrievability), what percentage of images returned are relevant (precision), and the average number of results a visitor must review before finding the submitted image (mean reciprocal rank). When trying to discover the same image again among similar images, Yandex performs best. When searching for pages containing a specific image, Google and Yandex outperform the others when discovering photographs, with precision scores of 0.8191 and 0.8297, respectively. In both of these cases, Google and Yandex perform better with natural images than with abstract ones, achieving a difference in retrievability as high as 54% between images in these categories. These results affect anyone applying common web search engines to search for technical documents that use abstract images.
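Of the three metrics defined above, mean reciprocal rank is the most easily misread; a short sketch of its computation (a query whose image is never found contributes zero):

```python
def mean_reciprocal_rank(ranks):
    """ranks: 1-based position of the submitted image in each query's
    result list, or None when the image was never found.

    Returns the average of 1/rank over all queries, counting misses
    as 0, so higher is better and 1.0 means every image ranked first.
    """
    if not ranks:
        return 0.0
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)
```

Averaging reciprocal ranks rather than raw ranks keeps a single deeply buried result from dominating the score, which is why MRR is the standard choice for "find this one image again" evaluations.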
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction aimed at aligning a model's representations with the underlying factors generating the data (e.g. color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, with relative improvements of over 60% in some settings compared to existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization.
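The distance at the heart of the HFS criterion is the Hausdorff distance between point sets; a minimal NumPy version is sketched below. HFS applies such a distance between (samples from) the support of the joint latent distribution and the product of its marginal supports, which this sketch does not reproduce.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n, d) and
    b (m, d): the largest distance from any point in one set to its
    nearest neighbor in the other set."""
    # Pairwise distances, shape (n, m).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances in each direction, then take the max.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Minimizing this distance pushes the two sets to cover each other, which is exactly the "same support" requirement, while leaving the densities over that support, and hence factor correlations, unconstrained.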
This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV competition addresses an important aspect that Optical Character Recognition (OCR) models usually do not study, namely the recognition of scene text instances unseen at training time. The competition compiled a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thereby covering a wide range of data distributions. A new and independent validation and test set was formed, containing scene text instances that are out of vocabulary with respect to the training data. The competition was conducted on two tasks, end-to-end recognition and cropped-word text recognition, respectively. A thorough analysis of the results of the baselines and the different participants is presented. Interestingly, under the newly studied setting, current state-of-the-art models show a significant performance gap. We conclude that the OOV dataset proposed in this challenge will be an important area to explore in order to develop scene text models that achieve more robust and generalized predictions.
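The defining step of such a benchmark is constructing an evaluation split whose text never appears in the training vocabulary; a simplified version of that filtering rule can be sketched as follows (case-insensitive exact matching is an assumption here).

```python
def oov_split(train_words, candidate_instances):
    """Keep only candidate text instances whose transcription never
    occurs in the training vocabulary, making them out-of-vocabulary
    at training time (a simplified version of the split rule)."""
    vocab = {w.lower() for w in train_words}
    return [inst for inst in candidate_instances
            if inst.lower() not in vocab]
```

Evaluating only on such instances prevents a model from scoring well by memorizing the training lexicon instead of actually reading characters.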
We present Neural Feature Fusion Fields (N3F), a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images as a 3D scene. Given an image feature extractor, for example one pre-trained using self-supervision, N3F uses it as a teacher to learn a student network defined in 3D space. The 3D student network resembles a neural radiance field that distills said features, and can be trained with the usual differentiable rendering machinery. As a result, N3F is readily applicable to most neural rendering formulations, including vanilla NeRF and its extensions to complex dynamic scenes. We show that our method not only enables semantic understanding in the context of scene-specific neural fields without the use of manual labels, but also consistently improves over the self-supervised 2D baselines. This is demonstrated by considering a variety of tasks, such as 2D object retrieval, 3D segmentation, and scene editing, on diverse sequences, including long egocentric videos from the EPIC-KITCHENS benchmark.
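The "usual differentiable rendering machinery" mentioned above accumulates per-sample quantities along a camera ray with NeRF-style weights; for feature distillation, the student renders features instead of colors and matches them to the 2D teacher's output at the same pixel. The sketch below shows only this rendering step, not the full training loop.

```python
import numpy as np

def render_features(density, feats, deltas):
    """Volume-render per-sample features along one ray with NeRF-style
    weights, as the 3D student field would to match the 2D teacher.

    density: (n,) nonnegative volume densities at the ray samples.
    feats:   (n, c) feature vectors at the samples.
    deltas:  (n,) lengths of the ray segments.
    """
    # Per-sample opacity and accumulated transmittance.
    alpha = 1.0 - np.exp(-density * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    # Weighted sum of features along the ray, shape (c,).
    return weights @ feats
```

Training then minimizes a loss (e.g. squared error) between the rendered feature and the teacher feature at the corresponding pixel, so gradients flow through the same machinery used for photometric NeRF losses.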